The urgent demand for a reliable firearm detection system arises from the escalating concerns surrounding public safety, particularly in settings where large numbers of people gather, such as schools, hospitals, and densely populated public areas. Instances of violence involving firearms have underscored the critical necessity for effective detection methods to mitigate risks and safeguard individuals.
This problem is an intriguing challenge because it offers an opportunity to apply cutting-edge technology, notably deep learning, to strengthen public safety measures. Deep learning, a subset of artificial intelligence, excels at recognizing complex patterns in data, making it well suited to firearm detection. By harnessing these capabilities, we can develop systems that accurately identify firearms in real time, enabling swift responses to potential threats.
Furthermore, the development of weapon detection systems not only addresses the immediate need for enhanced security but also reflects a broader commitment to proactive risk management and crisis prevention. By efficiently identifying individuals carrying dangerous weapons, these systems empower security personnel to intervene before incidents escalate, thereby reducing the likelihood of harm and promoting a safer environment for all.
library(EBImage)
library(keras)
##
## Attaching package: 'keras'
## The following object is masked from 'package:EBImage':
##
## normalize
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.4 ✔ readr 2.1.5
## ✔ forcats 1.0.0 ✔ stringr 1.5.1
## ✔ ggplot2 3.5.0 ✔ tibble 3.2.1
## ✔ lubridate 1.9.3 ✔ tidyr 1.3.1
## ✔ purrr 1.0.2
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::combine() masks EBImage::combine()
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ✖ purrr::transpose() masks EBImage::transpose()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
The code above loads the required packages: EBImage for reading and manipulating images, keras for building and running the deep learning model, and tidyverse for data wrangling. Note that keras masks EBImage's normalize(); the masked version can still be called explicitly as EBImage::normalize().
model <- load_model_hdf5("model.h5")
history <- read_rds("history.rds")
class_labels <- read_rds("class_labels.rds")
myimages <- read_rds("myimages.rds")
aimages <- read_rds("aimages.rds")
trainx <- NULL
testx <- NULL
for (i in 1:60) {trainx <- rbind(trainx,myimages[[i]])}
for (i in 69:114) {trainx <- rbind(trainx,myimages[[i]])}
for (i in 61:68) {testx <- rbind(testx,myimages[[i]])}
for (i in 115:119) {testx <- rbind(testx,myimages[[i]])}
trainy <- NULL
testy <- NULL
for (i in 1:60) {trainy <- rbind(trainy,class_labels[[i]])}
for (i in 69:114) {trainy <- rbind(trainy,class_labels[[i]])}
for (i in 61:68) {testy <- rbind(testy,class_labels[[i]])}
for (i in 115:119) {testy <- rbind(testy,class_labels[[i]])}
trainLabels <- to_categorical(trainy)
testLables <- to_categorical(testy)
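The row-binding loops above can be written more compactly with do.call(rbind, ...). Below is a minimal base-R sketch using simulated data in place of the real 'myimages' and 'class_labels' lists; the index ranges match those used above (106 training samples, 13 test samples):

```r
# Simulate 119 flattened images, each a 1 x 4 feature row (placeholder for real data)
set.seed(42)
myimages_sim <- lapply(1:119, function(i) matrix(rnorm(4), nrow = 1))
class_labels_sim <- lapply(1:119, function(i) matrix(i %% 2, nrow = 1))

# Same split as above: interior ranges 61:68 and 115:119 are held out for testing
train_idx <- c(1:60, 69:114)
test_idx  <- c(61:68, 115:119)

# Bind the selected list elements row-wise in one call each
trainx_sim <- do.call(rbind, myimages_sim[train_idx])
testx_sim  <- do.call(rbind, myimages_sim[test_idx])
trainy_sim <- do.call(rbind, class_labels_sim[train_idx])
testy_sim  <- do.call(rbind, class_labels_sim[test_idx])

dim(trainx_sim)  # 106 rows, 4 columns
dim(testx_sim)   # 13 rows, 4 columns
```

This produces the same matrices as the explicit loops while avoiding repeated rbind() calls inside a loop, which copy the growing matrix on every iteration.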
library(ggplot2)
# Create a data frame for training and validation metrics
history_df <- data.frame(
epoch = 1:length(history$metrics$loss),
loss = history$metrics$loss,
val_loss = history$metrics$val_loss,
accuracy = history$metrics$accuracy,
val_accuracy = history$metrics$val_accuracy
)
This data frame consolidates the training and validation metrics (loss and accuracy) by epoch, so the model's performance during training can be plotted and compared easily with ggplot2.
# Melt the data frame for easier plotting
library(reshape2)
##
## Attaching package: 'reshape2'
## The following object is masked from 'package:tidyr':
##
## smiths
history_df <- melt(history_df, id.vars = "epoch")
# Plot loss
loss_plot <- ggplot(subset(history_df, variable %in% c("loss", "val_loss")),
aes(x = epoch, y = value, color = variable)) +
geom_line() +
labs(title = "Training and Validation Loss",
x = "Epochs",
y = "Loss") +
theme_minimal() +
theme(legend.position = "bottom")
The above code block focuses on visualizing the training and validation loss over the epochs, allowing for a visual assessment of how the loss changes during the training process.
# Plot accuracy
accuracy_plot <- ggplot(subset(history_df, variable %in% c("accuracy", "val_accuracy")),
aes(x = epoch, y = value, color = variable)) +
geom_line() +
labs(title = "Training and Validation Accuracy",
x = "Epochs",
y = "Accuracy") +
theme_minimal() +
theme(legend.position = "bottom")
# Display the plots
loss_plot
accuracy_plot
plot(history)
The above code block generates and displays two plots: one for training
and validation loss and another for training and validation accuracy,
allowing for visual assessment of these metrics over the epochs of
training.
model %>% evaluate(trainx,trainLabels)
## 4/4 - 0s - loss: 0.3573 - accuracy: 0.8113 - 211ms/epoch - 53ms/step
## loss accuracy
## 0.3573050 0.8113208
# Assuming your model is named 'model'
# Predict probabilities for each class
pred_prob <- model %>% predict(trainx)
## 4/4 - 0s - 104ms/epoch - 26ms/step
# Threshold the predicted probabilities
threshold <- 0.5
tpredictions <- ifelse(pred_prob[, 2] > threshold, 1, 0)
# Print the predictions
print(tpredictions)
## [1] 1 1 1 0 1 1 1 0 1 1 1 1 1 0 1 1 0 1 1 0 1 0 1 0 0 1 1 0 1 1 1 1 1 0 1 1 1
## [38] 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## [75] 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0
The code evaluates the model on the same data it was trained on, so these figures are an optimistic estimate of performance; the thresholded predictions show which class the model assigns to each training sample.
table(Predicted = tpredictions,Actual = trainy)
## Actual
## Predicted 0 1
## 0 42 16
## 1 4 44
model %>% evaluate(testx,testLables)
## 1/1 - 0s - loss: 0.6039 - accuracy: 0.5385 - 31ms/epoch - 31ms/step
## loss accuracy
## 0.6039302 0.5384616
The confusion matrix above shows 16 false negatives and 4 false positives on the training data, consistent with the training accuracy of 0.8113 and loss of 0.3573 reported earlier. On the held-out test set, however, accuracy drops to 0.5385 with a loss of 0.6039, indicating that the model generalizes poorly and is likely overfitting.
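The training accuracy reported by evaluate() can be checked directly against the confusion matrix above (42 and 44 correct predictions, 16 false negatives, and 4 false positives out of 106 samples). A quick base-R verification:

```r
# Confusion matrix from the training predictions (rows = predicted, cols = actual)
cm <- matrix(c(42, 4, 16, 44), nrow = 2,
             dimnames = list(Predicted = c("0", "1"), Actual = c("0", "1")))

# Accuracy = correctly classified samples / total samples
acc <- sum(diag(cm)) / sum(cm)
round(acc, 4)  # 0.8113, matching the evaluate() output above
```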
pred_prob <- model %>% predict(testx)
## 1/1 - 0s - 22ms/epoch - 22ms/step
# Threshold the predicted probabilities
threshold <- 0.5
predictions <- ifelse(pred_prob[, 2] > threshold, 1, 0)
# Print the predictions
print(predictions)
## [1] 0 0 0 0 0 1 1 0 0 0 0 0 0
The vector above represents binary predictions for the test data points. These predictions are derived by comparing the probability of belonging to the positive class (class 1) against a threshold of 0.5. When the probability of the positive class exceeds 0.5, the prediction is set to 1; otherwise, it is set to 0.
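The thresholding step can be illustrated in isolation. Here is a base-R sketch with a hypothetical two-column probability matrix (column 1 = class 0, column 2 = class 1; the values are made up), mirroring the ifelse() call above:

```r
# Hypothetical softmax-style outputs: each row sums to 1
pred_prob_demo <- matrix(c(0.74, 0.26,
                           0.11, 0.89,
                           0.56, 0.44), ncol = 2, byrow = TRUE)

threshold <- 0.5
# Predict class 1 only when its probability exceeds the threshold
predictions_demo <- ifelse(pred_prob_demo[, 2] > threshold, 1, 0)
predictions_demo  # 0 1 0
```

Raising the threshold above 0.5 would make the classifier more conservative about predicting class 1, trading false positives for false negatives.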
# Define the index column sequence
index_sequence <- c(61:68, 115:119)
# Repeat the index sequence to match the desired number of rows
num_rows <- 13 # Change this to the desired number of rows
index_column <- rep(index_sequence, length.out = num_rows)
# Combine the predicted probabilities, predictions, actual labels, and index column
result <- cbind(index = index_column, pred_prob, Predicted = predictions, Actual = testy)
result
##       index                         Predicted Actual
##  [1,]    61 0.7434713 2.565286e-01         0      1
##  [2,]    62 0.7742945 2.257055e-01         0      1
##  [3,]    63 0.8028821 1.971179e-01         0      1
##  [4,]    64 0.7924243 2.075758e-01         0      1
##  [5,]    65 0.8475093 1.524907e-01         0      1
##  [6,]    66 0.1092069 8.907930e-01         1      1
##  [7,]    67 0.1603232 8.396768e-01         1      1
##  [8,]    68 0.5553893 4.446107e-01         0      1
##  [9,]   115 0.8596473 1.403527e-01         0      0
## [10,]   116 0.9993661 6.339109e-04         0      0
## [11,]   117 0.7090459 2.909541e-01         0      0
## [12,]   118 0.9999695 3.052551e-05         0      0
## [13,]   119 0.9829113 1.708869e-02         0      0
The code builds an index column from the test-image indices (61:68 and 115:119) and binds it to the predicted class probabilities, the thresholded predictions, and the actual labels, so each row of the result can be traced back to a specific image in the dataset.
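Note that rep(..., length.out = num_rows) is a no-op here, because the index sequence already contains exactly 13 values (8 test images from 61:68 plus 5 from 115:119); it would only matter if num_rows differed from the sequence length. A quick check:

```r
index_sequence <- c(61:68, 115:119)
length(index_sequence)  # 13

# With length.out equal to the sequence length, rep() returns it unchanged
identical(rep(index_sequence, length.out = 13), index_sequence)  # TRUE

# With a larger length.out, the sequence would be recycled from the start
rep(index_sequence, length.out = 15)[14:15]  # 61 62
```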
display(aimages[[115]])
Below is image 115 from the dataset, shown with EBImage's display().
# Define labels
labels <- c("Small Gun", "Long Gun")
matching_indices <- which(predictions == testy)
matching_indices
## [1] 6 7 9 10 11 12 13
# Find indices where predictions do not match actual labels
matching_indices1 <- which(predictions != testy)
matching_indices1
## [1] 1 2 3 4 5 8
result[matching_indices, "index"]
## [1] 66 67 115 116 117 118 119
The code above defines the two class labels, 'Small Gun' and 'Long Gun', then records the indices where the predictions agree with the actual labels in 'matching_indices' and the indices where they disagree in 'matching_indices1'. Indexing the result matrix with 'matching_indices' maps those positions back to the original image indices.
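The match/mismatch bookkeeping and the "+ 1" label lookup can be sketched with simulated vectors (the label names follow the document; the prediction and label values here are hypothetical):

```r
labels <- c("Small Gun", "Long Gun")

# Hypothetical 0/1 predictions and ground truth for five samples
predictions_demo <- c(0, 1, 1, 0, 1)
testy_demo       <- c(1, 1, 0, 0, 1)

matching    <- which(predictions_demo == testy_demo)  # positions 2, 4, 5
mismatching <- which(predictions_demo != testy_demo)  # positions 1, 3

# 0/1 class codes map to label strings via + 1, since R vectors are 1-indexed
labels[predictions_demo[matching] + 1]  # "Long Gun" "Small Gun" "Long Gun"
```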
# Display images where predictions match actual labels along with their labels and corresponding index values
for (i in matching_indices) {
# Get the corresponding index value
index_value <- result[i, "index"]
# Plot the image
plot(aimages[[index_value]])
# Get the actual and predicted labels
actual_label <- labels[testy[i] + 1] # +1 to index labels properly
predicted_label <- labels[predictions[i] + 1] # +1 to index labels properly
# Add title with actual and predicted labels along with the index value
title(paste("Image", i, "(Index:", index_value, "): Actual =", actual_label, ", Predicted =", predicted_label))
}
# Display images where predictions do not match actual labels, along with their labels and corresponding index values
for (i in matching_indices1) {
# Get the corresponding index value
index_value <- result[i, "index"]
# Plot the image
plot(aimages[[index_value]])
# Get the actual and predicted labels
actual_label <- labels[testy[i] + 1] # +1 to index labels properly
predicted_label <- labels[predictions[i] + 1] # +1 to index labels properly
# Add title with actual and predicted labels along with the index value
title(paste("Image", i, "(Index:", index_value, "): Actual =", actual_label, ", Predicted =", predicted_label))
}
The two loops above display each test image with a title giving its position, dataset index, and the actual and predicted labels: the first loop covers the correctly classified images and the second the misclassified ones.